Recurrent Neural-Network Learning of Phonological Regularities in Turkish
Abstract
Simple recurrent networks were trained with sequences of phonemes from a corpus of Turkish words. The network's task was to predict the next phoneme. The aim of the study was to examine the representations developed within the hidden layer of the network, in order to investigate the extent to which such networks can learn phonological regularities from this input. It was found that across the different networks, hidden units came to act as detectors for natural phonological classes such as vowels, consonants, voiced stops, and front and back vowels. The initial state of the networks contained no information of this kind, nor were these classes explicit in the input. The networks were also able to encode information about the temporal distribution of these classes.

The network used is a simple recurrent network of the type first investigated by Elman (1990). It consists of a feedforward network, supplemented with recurrent connections from the hidden layer, and was trained with the back-propagation learning algorithm (Rumelhart, Hinton and Williams, 1986). The ability of such networks to extract phonological structure is well established; for example, Gasser (1992) showed that a similar network could learn distributed representations for syllables when trained on words of an artificial language. Figure 1 shows the architecture of the network. Within this architecture, four different network configurations were investigated. All had 28 units in both the input and output layers; they varied only in the number of hidden units, ranging from two to five. All connections in the network have an inherent time delay of one time step. As a result, the recurrent connections between hidden units give the network access to a copy of its hidden-layer activations at the previous time step, and it takes two time steps for any information to propagate from the input layer through to the output layer. The network is fully connected.

The input to the network is a series of sequentially presented phonemes from a corpus of 602 Turkish words. Each phoneme is represented by a 28-bit vector in which each of the 28 Turkish phonemes present in the corpus is assigned a different bit; whenever a particular phoneme occurs, the corresponding bit is turned on. This sparse encoding scheme was taken from Elman (1990), and ensures that each vector is …
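To make the setup concrete, here is a minimal sketch of an Elman-style simple recurrent network trained on next-phoneme prediction, in the spirit of the architecture described above. The toy phoneme inventory, toy corpus, hyper-parameters, and one-step truncated back-propagation are illustrative assumptions, not details taken from the paper, which used the 28-phoneme Turkish corpus and two to five hidden units.

```python
import numpy as np

rng = np.random.default_rng(0)

phonemes = list("aeiou") + list("bdgkt")   # toy inventory (assumption)
idx = {p: i for i, p in enumerate(phonemes)}
N, H = len(phonemes), 3                    # input/output size, hidden units

def one_hot(p):
    """Sparse 'one bit per phoneme' encoding, as in Elman (1990)."""
    v = np.zeros(N)
    v[idx[p]] = 1.0
    return v

# Weights: input->hidden, hidden(t-1)->hidden (recurrent), hidden->output.
W_xh = rng.normal(0, 0.1, (H, N))
W_hh = rng.normal(0, 0.1, (H, H))
W_hy = rng.normal(0, 0.1, (N, H))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

lr = 0.1
corpus = ["bada", "gudo", "kita"]          # toy 'words' (assumption)

for epoch in range(500):
    for word in corpus:
        h = np.zeros(H)                    # context reset at word start
        for t in range(len(word) - 1):
            x, target = one_hot(word[t]), one_hot(word[t + 1])
            h_prev = h
            # Hidden state sees the input plus a copy of the previous
            # hidden activations, reflecting the one-step time delay.
            h = sigmoid(W_xh @ x + W_hh @ h_prev)
            y = softmax(W_hy @ h)          # next-phoneme prediction
            # One-step back-propagation, treating the copied context as
            # fixed input (the paper's exact training schedule is not
            # specified here).
            dy = y - target
            dh = (W_hy.T @ dy) * h * (1 - h)
            W_hy -= lr * np.outer(dy, h)
            W_xh -= lr * np.outer(dh, x)
            W_hh -= lr * np.outer(dh, h_prev)
```

After training, inspecting the hidden vector h as each word is fed through the network is the kind of analysis the paper performs: a hidden unit that is consistently active for vowels and inactive for consonants has, in effect, become a detector for that phonological class.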
Similar Papers
Inherent Biases of Recurrent Neural Networks for Phonological Assimilation and Dissimilation
A recurrent neural network model of phonological pattern learning is proposed. The model is a relatively simple neural network with one recurrent layer, and displays biases in learning that mimic observed biases in human learning. Single-feature patterns are learned faster than two-feature patterns, and vowel or consonant-only patterns are learned faster than patterns involving vowels and conso...
Fronto-Parietal Contributions to Phonological Processes in Successful Artificial Grammar Learning
Sensitivity to regularities plays a crucial role in the acquisition of various linguistic features from spoken language input. Artificial grammar learning paradigms explore pattern recognition abilities in a set of structured sequences (i.e., of syllables or letters). In the present study, we investigated the functional underpinnings of learning phonological regularities in auditorily presented...
Computational Modeling of Statistical Learning: Effects of Transitional Probability Versus Frequency and Links to Word Learning
Statistical learning mechanisms play an important role in theories of language acquisition and processing. Recurrent neural network models have provided important insights into how these mechanisms might operate. We examined whether such networks capture two key findings in human statistical learning. In Simulation 1, a simple recurrent network (SRN) performed much like human learners: it was s...
A Recurrent Neural Network Model for solving CCR Model in Data Envelopment Analysis
In this paper, we present a recurrent neural network model for solving CCR Model in Data Envelopment Analysis (DEA). The proposed neural network model is derived from an unconstrained minimization problem. In the theoretical aspect, it is shown that the proposed neural network is stable in the sense of Lyapunov and globally convergent to the optimal solution of CCR model. The proposed model has...
Publication date: 1997